
    Global existence and optimal decay estimates of strong solutions to the compressible viscoelastic flows

    This paper is dedicated to the global existence and optimal decay estimates of strong solutions to the compressible viscoelastic flows in the whole space $\mathbb{R}^n$ with any $n\geq 2$. We aim at extending the works of Qian \& Zhang and Hu \& Wang to the critical $L^p$ Besov space, which is not related to the usual energy space. With the aid of intrinsic properties of viscoelastic fluids as in \cite{QZ1}, we consider a hyperbolic-parabolic system that is more complicated than the usual Navier-Stokes equations. We define "\emph{two effective velocities}", which allow us to cancel out the coupling among the density, the velocity and the deformation tensor. Consequently, the global existence of strong solutions is constructed by elementary energy approaches only. Besides, the optimal time-decay estimates of strong solutions are shown in the general $L^p$ critical framework, which improves the decay results due to Hu \& Wu in that the initial velocity may be \textit{large and highly oscillating}. Comment: 44 pages
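    For context, systems of this type are commonly written in the following form; this is a sketch of a standard compressible viscoelastic model, and the exact pressure law, viscosity coefficients and coupling term are assumptions rather than details taken from this abstract:

        \[
        \begin{aligned}
        &\partial_t \rho + \operatorname{div}(\rho u) = 0,\\
        &\partial_t(\rho u) + \operatorname{div}(\rho u\otimes u) - \mu\Delta u - (\mu+\lambda)\nabla\operatorname{div} u + \nabla P(\rho) = \operatorname{div}\big(\rho F F^{\top}\big),\\
        &\partial_t F + u\cdot\nabla F = \nabla u\, F,
        \end{aligned}
        \]

    where $\rho$ is the density, $u$ the velocity and $F$ the deformation tensor; the "two effective velocities" mentioned above are combinations of these unknowns chosen so that the coupling between them cancels.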

    Learning Convolutional Neural Networks using Hybrid Orthogonal Projection and Estimation

    Convolutional neural networks (CNNs) have yielded excellent performance in a variety of computer vision tasks, where CNNs typically adopt a similar structure consisting of convolution layers, pooling layers and fully connected layers. In this paper, we propose to apply a novel method, namely Hybrid Orthogonal Projection and Estimation (HOPE), to CNNs in order to introduce orthogonality into the CNN structure. The HOPE model can be viewed as a hybrid model that combines feature extraction using orthogonal linear projection with mixture models. It is an effective model for extracting useful information from the original high-dimensional feature vectors while filtering out irrelevant noise. In this work, we present three different ways to apply the HOPE models to CNNs, i.e., {\em HOPE-Input}, {\em single-HOPE-Block} and {\em multi-HOPE-Blocks}. For {\em HOPE-Input} CNNs, a HOPE layer is used directly after the input to de-correlate the high-dimensional input feature vectors. Alternatively, in {\em single-HOPE-Block} and {\em multi-HOPE-Blocks} CNNs, we consider using HOPE layers to replace one or more blocks in the CNNs, where one block may include several convolutional layers and one pooling layer. Experimental results on both the CIFAR-10 and CIFAR-100 data sets show that the orthogonal constraints imposed by the HOPE layers can significantly improve the performance of CNNs in these image classification tasks (we achieve one of the best performances when image augmentation is not applied, and a top-5 performance with image augmentation). Comment: 7 pages, 5 figures, submitted to AAAI 201
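    As a rough illustration of the kind of orthogonality constraint such a layer imposes, below is a minimal PyTorch-style sketch of a linear projection whose rows are pushed toward orthonormality through a soft penalty. The class and penalty names are hypothetical, and this is not the authors' exact HOPE formulation.

        import torch
        import torch.nn as nn

        class OrthogonalProjection(nn.Module):
            """Linear projection whose rows are encouraged to be orthonormal."""
            def __init__(self, in_dim, out_dim):
                super().__init__()
                self.proj = nn.Linear(in_dim, out_dim, bias=False)

            def forward(self, x):
                return self.proj(x)

            def orthogonality_penalty(self):
                W = self.proj.weight                     # (out_dim, in_dim)
                gram = W @ W.t()                         # pairwise row correlations
                eye = torch.eye(gram.size(0), device=W.device)
                # Penalize deviation from the identity (soft orthogonality).
                return ((gram - eye) ** 2).sum()

        # Usage: add the penalty to the task loss during training, e.g.
        # loss = criterion(model(x), y) + beta * hope_layer.orthogonality_penalty()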

    Towards Optimal Adaptive Wireless Communications in Unknown Environments

    Designing efficient channel access schemes for wireless communications without any prior knowledge about the nature of the environment is very challenging, especially when the channel-state distributions of the spectrum resources may be entirely or partially stochastic and/or adversarial at different times and locations. In this paper, we propose an adaptive channel access algorithm for wireless communications in unknown environments based on the theory of multi-armed bandit (MAB) problems. By automatically tuning two control parameters, i.e., the learning rate and the exploration probability, our algorithms are capable of finding the optimal channel access strategies and achieving almost optimal learning performance over time under the four typical regimes we define for general unknown environments: the stochastic regime, where channels follow some unknown i.i.d. process; the adversarial regime, where all channels suffer adversarial jamming attacks; the mixed stochastic and adversarial regime, where a subset of channels is attacked; and the contaminated stochastic regime, where occasional adversarial events contaminate the stochastic channel process. To reduce the implementation time and space complexity, we further develop an enhanced algorithm by exploiting the internal structure of the channel access strategy selection. We conduct extensive simulations in all these regimes to validate our theoretical analysis. The quantitative performance studies indicate the superior throughput gain and the flexibility of our algorithm in practice, which is resilient to both oblivious and adaptive jamming attacks of varying intelligence and of any attack strength, ranging from no attack to a full attack on all spectrum resources. Comment: accepted, and to appear in IEEE Transactions on Wireless Communications
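    The description points to an EXP3-style bandit update with a tunable learning rate and exploration probability. The sketch below is a generic member of that family, not the paper's algorithm; the function name, the fixed parameters and the reward callback are illustrative assumptions.

        import math
        import random

        def exp3_channel_access(num_channels, rounds, reward_fn, eta=0.05, gamma=0.05):
            """Generic EXP3-style channel selection with learning rate eta and
            exploration probability gamma (illustrative sketch only)."""
            weights = [1.0] * num_channels
            for t in range(rounds):
                total = sum(weights)
                # Mix the weight-based distribution with uniform exploration.
                probs = [(1 - gamma) * w / total + gamma / num_channels for w in weights]
                channel = random.choices(range(num_channels), weights=probs)[0]
                reward = reward_fn(channel, t)       # observed throughput in [0, 1]
                # Importance-weighted update for the chosen channel only.
                weights[channel] *= math.exp(eta * reward / probs[channel])
            return weights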

    A Deep Learning Based Fast Image Saliency Detection Algorithm

    In this paper, we propose a fast deep learning method for object saliency detection using convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify the input images based on the pixel-wise gradients, so as to reduce a pre-defined cost function that measures the class-specific objectness while clamping the class-irrelevant outputs to maintain the image background. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. We further apply SLIC superpixels and LAB-color-based low-level saliency features to smooth and refine the gradients. Our method is computationally efficient, much faster than other deep-learning-based saliency methods. Experimental results on two benchmark tasks, namely Pascal VOC 2012 and MSRA10k, show that our proposed method generates high-quality saliency maps, at least comparable with those of many slower and more complicated deep learning methods. Compared with purely low-level methods, our approach excels at handling many difficult images that contain complex backgrounds, highly variable salient objects, multiple objects, and/or very small salient objects. Comment: arXiv admin note: substantial text overlap with arXiv:1505.0117
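    A minimal sketch of the general idea of backpropagating a class score to the input pixels and smoothing the result over SLIC superpixels is given below; the model interface, preprocessing and parameter choices are illustrative assumptions, not the authors' implementation.

        import numpy as np
        import torch
        from skimage.segmentation import slic

        def gradient_saliency(model, image_tensor, class_idx, image_np, n_segments=200):
            """Pixel-wise input gradients of a class score, averaged per SLIC superpixel."""
            x = image_tensor.clone().requires_grad_(True)   # shape (1, 3, H, W)
            score = model(x)[0, class_idx]
            score.backward()
            sal = x.grad.abs().sum(dim=1)[0].numpy()        # (H, W) gradient magnitude

            # Smooth the noisy gradients by averaging within each superpixel.
            segments = slic(image_np, n_segments=n_segments, compactness=10)
            smoothed = np.zeros_like(sal)
            for label in np.unique(segments):
                mask = segments == label
                smoothed[mask] = sal[mask].mean()
            return smoothed / (smoothed.max() + 1e-8)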

    Sigmoid-Based Refined Composite Multiscale Fuzzy Entropy and t-Distributed Stochastic Neighbor Embedding Based Fault Diagnosis of Rolling Bearing

    Multiscale fuzzy entropy (MFE) has been a prevalent tool to quantify the complexity of time series. However, it is extremely sensitive to the predetermined parameters and to the length of the time series, and it may yield an inaccurate or even undefined entropy estimate when the time series is too short. In this paper, the sigmoid-based refined composite multiscale fuzzy entropy (SRCMFE) is introduced to improve the robustness of MFE's complexity measurement for short time series. SRCMFE is also used to quantify the dynamical properties of mechanical vibration signals, and based on it a new rolling bearing fault diagnosis approach is proposed by combining SRCMFE with t-distributed stochastic neighbor embedding (t-SNE) for feature dimension reduction and variable predictive model based class discrimination (VPMCD) for mode classification. In the proposed method, SRCMFE is first employed to extract complexity characteristics from the vibration signals of rolling bearings, and t-SNE is then used for feature dimension reduction to obtain a low-dimensional manifold characteristic. Next, VPMCD is employed to construct a multi-fault classifier that performs automatic fault diagnosis. Finally, the proposed approach is applied to experimental data of rolling bearings, and the results indicate that the proposed method can effectively distinguish different fault categories of rolling bearings.
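    To illustrate only the structure of such a pipeline, the sketch below coarse-grains a signal at several scales and embeds the resulting per-signal feature vectors with scikit-learn's t-SNE. The entropy estimator is replaced by a placeholder variance statistic, so this is a structural sketch under stated assumptions, not the SRCMFE definition.

        import numpy as np
        from sklearn.manifold import TSNE

        def coarse_grain(signal, scale):
            """Average consecutive non-overlapping windows of length `scale`."""
            n = len(signal) // scale
            return signal[:n * scale].reshape(n, scale).mean(axis=1)

        def multiscale_features(signal, max_scale=10):
            # Placeholder complexity statistic per scale; a real SRCMFE computes
            # a fuzzy-entropy value here instead of the variance.
            return np.array([coarse_grain(signal, s).var() for s in range(1, max_scale + 1)])

        def embed_features(feature_matrix, dim=2):
            """Reduce the per-signal multiscale feature vectors to a low-dimensional manifold."""
            return TSNE(n_components=dim, perplexity=5).fit_transform(feature_matrix)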

    Almost isometries between Teichmüller spaces

    We prove that the Teichmüller space of surfaces with given boundary lengths equipped with the arc metric (resp. the Teichmüller metric) is almost isometric to the Teichmüller space of punctured surfaces equipped with the Thurston metric (resp. the Teichmüller metric). Comment: 20 pages, 5 figures. All comments are welcome

    The study of a new gerrymandering methodology

    This paper aims to obtain a simple division diagram of congressional districts, where the only constraint is that each district should contain the same population whenever possible. In order to solve this problem, we introduce three different standards for a "simple" shape. The first standard is that the final shape of the congressional districts should be the simplest possible figure, and we apply a modified "shortest split line algorithm" in which only the equal-population factor is considered. The second standard is that the districting should preserve the integrity of the current administrative areas for convenience of management. Thus we combine the administrative-area factor with the first standard and generate an improved model, resulting in a new diagram in which the perimeters of the districts follow the boundaries of existing counties. Moreover, the districting should take geographic features into account; the third standard is introduced to describe this situation. Finally, it can be shown that the difference between the supporting ratio of a certain party in each district and the average supporting ratio of that party in the whole state approximately obeys a chi-square distribution. Consequently, we obtain an archetypal formula to check whether the districting we propose is fair. Comment: 23 pages, 15 figures, 2007 American mathematical modeling contest "Meritorious Winner"
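    A minimal sketch of the kind of fairness check described at the end is given below: it aggregates squared deviations of district-level support ratios from the state-wide average and compares the sum against a chi-square threshold. The normalization, degrees of freedom and significance level are illustrative assumptions, not the paper's formula.

        from scipy import stats

        def fairness_check(district_support, alpha=0.05):
            """Chi-square-style check of whether district-level support ratios deviate
            from the state-wide average more than chance would allow (illustrative)."""
            k = len(district_support)
            mean = sum(district_support) / k
            # Normalized squared deviations, summed over districts.
            statistic = sum((p - mean) ** 2 / (mean * (1 - mean)) for p in district_support)
            threshold = stats.chi2.ppf(1 - alpha, df=k - 1)
            return statistic <= threshold, statistic, threshold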

    Deep Learning for Object Saliency Detection and Image Segmentation

    In this paper, we propose several novel deep learning methods for object saliency detection based on powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients, so as to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one can then be used as a saliency map for the image. Moreover, we further propose several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second on one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, show that our proposed methods can generate high-quality saliency maps, clearly outperforming many existing methods. In particular, our approaches excel at handling many difficult images that contain complex backgrounds, highly variable salient objects, multiple objects, and/or very small salient objects. Comment: 9 pages, 126 figures, technical report
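    To make the "discrepancy as saliency" idea concrete, here is a minimal sketch that perturbs the input along the gradient of a class score for a few steps and thresholds the accumulated per-pixel change into a rough mask. The step size, step count and threshold are illustrative assumptions rather than the authors' settings.

        import torch

        def discrepancy_saliency(model, image, class_idx, steps=10, lr=0.5, thresh=0.2):
            """Iteratively modify the image to suppress a class score; the per-pixel
            change serves as a saliency map, thresholded into a crude mask."""
            x = image.clone().requires_grad_(True)           # shape (1, 3, H, W)
            for _ in range(steps):
                score = model(x)[0, class_idx]
                grad, = torch.autograd.grad(score, x)
                with torch.no_grad():
                    x -= lr * grad                           # gradient descent on the score
            diff = (x.detach() - image).abs().sum(dim=1)[0]  # per-pixel discrepancy
            saliency = diff / (diff.max() + 1e-8)
            mask = (saliency > thresh).float()               # rough segmentation mask
            return saliency, mask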

    The weak-field-limit solution for Kerr black hole in radiation gauge

    In this work we present the solution for a rotating Kerr black hole in the weak-field limit under the radiation gauge proposed by Chen and Zhu [Phys. Rev. D 83, 061501(R) (2011)], with which the two physical components of the gravitational wave can be picked out exactly. Comment: Submitted to Eur. Phys. J. Plus; Minor revision

    The Chen-Ruan Cohomology of Almost Contact Orbifolds

    In analogy with the Chen-Ruan cohomology theory for almost complex orbifolds, we study an orbifold cohomology theory for almost contact orbifolds. We define the Chen-Ruan cohomology group of any almost contact orbifold. Using the methods for almost complex orbifolds (see [2]), we define the obstruction bundle for any 3-multisector of an almost contact orbifold and the Chen-Ruan cup product for the Chen-Ruan cohomology. We also prove that under this cup product the direct sum of the orbifold cohomology groups of all degrees forms a cohomology ring. Finally, we calculate two examples. Comment: 11 pages